Web Crawler: Extracting the Web Data

Author

  • Mini Singh Ahuja
Abstract

Internet usage has grown substantially in recent years, with users locating resources by following hypertext links. This growth led to the development of web crawlers, the programs that full-text search engines use to discover and index pages and thereby help users navigate the web. Crawled data can also support further research; for example, it can be used to find missing links or to detect communities in complex networks. In this paper we review web crawlers: their architecture, their types, and the challenges search engines face when using them.

Keywords: web crawler, blind traversal algorithms, best-first heuristic algorithms
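As a rough illustration of the "blind traversal" strategy named in the keywords, the sketch below crawls a toy in-memory link graph breadth-first. The `get_links` callback and the graph are illustrative stand-ins for a real HTTP fetch-and-parse step, not anything described in the paper.

```python
from collections import deque

def blind_crawl(seed, get_links, max_pages=10):
    """Blind (breadth-first) traversal: pages are visited in discovery
    order, with no notion of topical relevance."""
    visited, order = set(), []
    queue = deque([seed])
    while queue and len(order) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        # Queue every outgoing link indiscriminately -- hence "blind".
        queue.extend(get_links(url))
    return order

# Toy link graph standing in for real fetched pages.
graph = {"a": ["b", "c"], "b": ["c", "d"], "c": ["d"], "d": []}
print(blind_crawl("a", lambda u: graph.get(u, [])))  # ['a', 'b', 'c', 'd']
```

The `visited` set is what keeps a real crawler from re-downloading pages reachable through multiple links.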


Related articles

Prioritize the ordering of URL queue in Focused crawler

The enormous growth of the World Wide Web in recent years has made efficient resource discovery a necessity. Downloading only domain-specific web pages is not a simple task for a crawler, and an unfocused approach often yields undesired results. Several new ideas have therefore been proposed; a key technique among them is focused crawling, which is able to crawl particular topical...
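The URL-queue ordering this abstract discusses can be sketched as a best-first frontier: a priority queue keyed by a relevance score, so links judged on-topic are downloaded first. The link graph and keyword-match score below are hypothetical illustrations, not the paper's actual method.

```python
import heapq

def focused_crawl(seed, get_links, score, max_pages=10):
    """Best-first crawl: the frontier is a priority queue ordered by a
    relevance score, so URLs judged on-topic are downloaded first."""
    visited, order = set(), []
    frontier = [(-score(seed), seed)]  # negate: heapq is a min-heap
    while frontier and len(order) < max_pages:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in get_links(url):
            if link not in visited:
                heapq.heappush(frontier, (-score(link), link))
    return order

# Hypothetical link graph and a crude keyword-based relevance score.
graph = {
    "home": ["news", "web-crawler"],
    "web-crawler": ["crawler-types", "ads"],
}
score = lambda url: 1 if "crawler" in url else 0
print(focused_crawl("home", lambda u: graph.get(u, []), score))
# ['home', 'web-crawler', 'crawler-types', 'ads', 'news']
```

Note how the two "crawler" pages jump ahead of "news" even though "news" was discovered first; that reordering of the URL queue is the essence of focused crawling.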


Analyzing new features of infected web content in detection of malicious web pages

Recent improvements in web standards and technologies enable attackers to hide and obfuscate infectious code with new methods, thereby escaping security filters. In this paper, we study the application of machine learning techniques to detecting malicious web pages. To detect malicious web pages, we propose and analyze a novel set of features including HTML, JavaScript (jQuery...
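As a hedged illustration of feature-based detection, one might extract simple structural counts from a page's HTML before feeding them to a classifier. The feature names below are made up for the example and are not the paper's actual feature set.

```python
import re

def page_features(html):
    """Toy per-page feature vector: counts of constructs often abused in
    malicious pages. Illustrative only, not the paper's features."""
    return {
        "n_scripts": len(re.findall(r"<script\b", html, re.I)),
        "n_iframes": len(re.findall(r"<iframe\b", html, re.I)),
        "uses_eval": int("eval(" in html),
        "n_obfuscated": len(re.findall(r"unescape\(|%u[0-9a-fA-F]{4}", html)),
    }

sample = '<html><script>eval(code)</script><IFRAME src="x"></iframe></html>'
print(page_features(sample))
# {'n_scripts': 1, 'n_iframes': 1, 'uses_eval': 1, 'n_obfuscated': 0}
```

In a full pipeline, vectors like this would be the input rows to whatever supervised classifier is trained on labeled benign and malicious pages.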


Domain-Specific Web Site Identification: The CROSSMARC Focused Web Crawler

This paper presents techniques for identifying domain-specific web sites that have been implemented as part of the EC-funded R&D project CROSSMARC. The project aims to develop technology for extracting interesting information from domain-specific web pages. It is therefore important for CROSSMARC to identify web sites in which interesting domain-specific pages reside (focused web crawling). Th...


Research on Model of Network Information Extraction Based on Improved Topic-focused Web Crawler Key Technology

Original scientific paper. With the arrival of the big data era, characterized by semi-structured or unstructured text, extracting network information accurately has attracted wide attention from researchers. This paper proposes a model of network information extraction based on improved topic-focused web crawler key technology, taking Web news as the object of extraction. The authors elaborate ...


An Effective Fuzzy Clustering Algorithm for Web Document Classification: A Case Study in Cultural Content Mining

This article presents a novel crawling and clustering method for extracting and processing cultural data from the web in a fully automated fashion. Our architecture relies upon a ‘focused’ web crawler to download web documents relevant to culture. The term ‘focused crawler’ refers to web crawlers that search and process only those web pages that are relevant to a particular topic. After downloa...
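The clustering step can be illustrated with standard fuzzy c-means, the canonical fuzzy clustering algorithm, in which every document receives a graded membership in each cluster rather than a hard label. This 1-D pure-Python sketch is a generic FCM, not necessarily the article's exact method, and its deterministic initialization is a simplification.

```python
def fuzzy_cmeans(points, c=2, m=2.0, iters=50):
    """Standard fuzzy c-means on 1-D data: alternate between updating
    graded memberships and recomputing weighted cluster centers."""
    # Simplistic deterministic init (assumes c == 2): endpoints as centers.
    centers = [points[0], points[-1]]
    memberships = []
    for _ in range(iters):
        memberships = []
        for x in points:
            d = [abs(x - ck) or 1e-9 for ck in centers]  # guard zero distance
            memberships.append([
                1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                for i in range(c)
            ])
        centers = [
            sum(u[i] ** m * x for u, x in zip(memberships, points)) /
            sum(u[i] ** m for u in memberships)
            for i in range(c)
        ]
    return centers, memberships

# Two obvious 1-D clusters; centers converge near 1.5 and 10.5.
centers, u = fuzzy_cmeans([1.0, 2.0, 10.0, 11.0])
```

The soft memberships are what distinguish this from k-means: a document halfway between two topics ends up with roughly equal weight in both clusters instead of being forced into one.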




Publication date: 2014